Recent breakthroughs in semi-supervised semantic segmentation have been driven by contrastive learning. In prevalent pixel-wise contrastive learning solutions, the model maps pixels to deterministic representations and regularizes them in the latent space. However, inaccurate pseudo-labels may map the ambiguous representations of pixels to the wrong classes due to the limited cognitive ability of the model. In this paper, we define pixel-wise representations from a new perspective of probability theory and propose a Probabilistic Representation Contrastive Learning (PRCL) framework that improves representation quality by taking the probability of each representation into consideration. By modelling the mapping from pixels to representations probabilistically via multivariate Gaussian distributions, we can tune the contribution of ambiguous representations to tolerate the risk of inaccurate pseudo-labels. Furthermore, we define prototypes in the form of distributions, which indicate the confidence of a class, whereas point prototypes cannot. Moreover, we propose to regularize the distribution variance to enhance the reliability of representations. Owing to these benefits, high-quality feature representations can be derived in the latent space, and the performance of semantic segmentation can thereby be further improved. We conduct extensive experiments on Pascal VOC and CityScapes to demonstrate the superiority of PRCL. The code is available at https://github.com/Haoyu-Xie/PRCL.
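A minimal PyTorch sketch of the core idea, assuming diagonal Gaussians and a mutual-likelihood-style similarity; the function names, prototype handling, and loss weighting here are illustrative assumptions, not the authors' exact implementation:

```python
import torch
import torch.nn.functional as F

def mls_similarity(mu_q, var_q, mu_k, var_k):
    # Mutual-likelihood-style similarity between diagonal Gaussians:
    # high when means agree AND variances are small, so ambiguous
    # (high-variance) pixel representations contribute less.
    var = var_q.unsqueeze(1) + var_k.unsqueeze(0)        # (N, M, D)
    diff = (mu_q.unsqueeze(1) - mu_k.unsqueeze(0)) ** 2  # (N, M, D)
    return -0.5 * (diff / var + var.log()).sum(dim=-1)   # (N, M)

def probabilistic_contrastive_loss(mu, var, pseudo_labels,
                                   proto_mu, proto_var, tau=1.0):
    # InfoNCE-style loss: each pixel's Gaussian is pulled towards the
    # distribution-form prototype of its pseudo-labelled class.
    logits = mls_similarity(mu, var, proto_mu, proto_var) / tau
    return F.cross_entropy(logits, pseudo_labels)

# Toy usage: 8 pixels, 16-dim latent space, 4 classes.
mu = torch.randn(8, 16)
var = torch.randn(8, 16).exp()          # predict log-variance for positivity
proto_mu, proto_var = torch.randn(4, 16), torch.rand(4, 16) + 0.1
loss = probabilistic_contrastive_loss(
    mu, var, torch.randint(0, 4, (8,)), proto_mu, proto_var)
```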
Recently, the community has paid increasing attention to model scaling, which has contributed to developing model families with a wide spectrum of scales. Current methods either simply resort to a one-shot NAS manner to construct a non-structural and non-scalable model family, or rely on a manually fixed scaling strategy to scale an unnecessarily best base model. In this paper, we bridge both components and propose ScaleNet to jointly search the base model and the scaling strategy, so that the scaled large models can have more promising performance. Concretely, we design a super-supernet to embody models with a wide spectrum of sizes (e.g., FLOPs). Then, the scaling strategy can be learned interactively with the base model via a Markov chain-based evolution algorithm, and generalized to develop even larger models. To obtain a decent super-supernet, we design a hierarchical sampling strategy to enhance its training sufficiency and alleviate the disturbance. Experimental results show that our scaled networks enjoy significant performance superiority across various FLOPs levels, while the search cost is reduced by at least 2.53x. Code is available at https://github.com/luminolx/ScaleNet.
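The abstract gives no pseudo-code, but the joint search can be pictured roughly as below: a hypothetical Markov-chain-style evolutionary loop over (depth, width, resolution) scaling multipliers under a FLOPs budget. The `evaluate` stub stands in for supernet-based accuracy estimation of the scaled model; all constants are assumptions.

```python
import random

def flops(strategy, base_flops=400e6):
    # Rough FLOPs model: depth scales linearly, width and resolution quadratically.
    d, w, r = strategy
    return base_flops * d * (w ** 2) * (r ** 2)

def evaluate(strategy):
    # Stand-in for estimating the scaled model's accuracy on the super-supernet.
    d, w, r = strategy
    return -(d - 2.0) ** 2 - (w - 1.5) ** 2 - (r - 1.3) ** 2 + random.gauss(0, 0.01)

def mutate(strategy, step=0.1):
    # Markov-chain step: perturb one scaling factor, keeping the others fixed.
    s = list(strategy)
    i = random.randrange(3)
    s[i] = max(1.0, s[i] + random.choice([-step, step]))
    return tuple(s)

def search(budget_flops=4e9, iters=200):
    cur = (1.0, 1.0, 1.0)
    cur_score = evaluate(cur)
    best = (cur, cur_score)
    for _ in range(iters):
        cand = mutate(cur)
        if flops(cand) > budget_flops:
            continue
        score = evaluate(cand)
        if score >= cur_score:            # greedy acceptance; a real chain may
            cur, cur_score = cand, score  # also accept worse states stochastically
        if score > best[1]:
            best = (cand, score)
    return best

print(search())
```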
Vision transformers (ViTs) are usually considered less light-weight than convolutional neural networks (CNNs) due to their lack of inductive bias. Recent works therefore adopt convolutions as a plug-and-play module and embed them into various ViT counterparts. In this paper, we argue that convolutional kernels perform information aggregation to connect all tokens; however, they are actually unnecessary for light-weight ViTs if this explicit aggregation can work in a more homogeneous way. Inspired by this, we present LightViT as a new family of light-weight ViTs that achieves a better accuracy-efficiency balance on pure transformer blocks, without convolution. Concretely, we introduce a global yet efficient aggregation scheme into both the self-attention and the feed-forward network (FFN) of ViTs, where additional learnable tokens are introduced to capture global dependencies, and bi-dimensional channel and spatial attentions are imposed over token embeddings. Experiments show that our models achieve significant improvements on image classification, object detection, and semantic segmentation tasks. For example, our LightViT-T achieves 78.7% accuracy on ImageNet with only 0.7G FLOPs, outperforming PVTv2-B0 by 8.2% while running 11% faster on GPU. Code is available at https://github.com/hunto/LightViT.
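A toy PyTorch sketch of how learnable global tokens can realize such explicit aggregation without convolutions; the module layout and sizes are illustrative assumptions, not LightViT's actual blocks:

```python
import torch
import torch.nn as nn

class GlobalTokenAggregation(nn.Module):
    """A few learnable global tokens gather information from all local
    tokens, then broadcast it back, giving every token a global view at
    O(N*G) attention cost instead of O(N^2)."""

    def __init__(self, dim, num_global=8, num_heads=4):
        super().__init__()
        self.global_tokens = nn.Parameter(torch.zeros(1, num_global, dim))
        self.gather = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.broadcast = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                      # x: (B, N, dim) local tokens
        g = self.global_tokens.expand(x.size(0), -1, -1)
        g, _ = self.gather(g, x, x)            # global tokens attend to locals
        out, _ = self.broadcast(x, g, g)       # locals read back the summary
        return x + out                         # residual connection

x = torch.randn(2, 196, 64)
print(GlobalTokenAggregation(64)(x).shape)     # torch.Size([2, 196, 64])
```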
Unlike existing knowledge distillation methods, which focus on baseline settings where the teacher models and training strategies are not as strong and competitive as state-of-the-art approaches, this paper presents a method dubbed DIST to distill better from a stronger teacher. We empirically find that the discrepancy between the predictions of the student and those of a stronger teacher tends to be fairly severe. As a result, the exact match of predictions in KL divergence would disturb the training and make existing methods perform poorly. In this paper, we show that simply preserving the relations between the predictions of teacher and student suffices, and we propose a correlation-based loss to capture the intrinsic inter-class relations from the teacher explicitly. Besides, considering that different instances have different semantic similarities to each class, we also extend this relational match to the intra-class level. Our method is simple yet practical, and extensive experiments demonstrate that it adapts well to various architectures, model sizes, and training strategies, and can consistently achieve state-of-the-art performance on image classification, object detection, and semantic segmentation tasks. Code is available at: https://github.com/hunto/DIST_KD .
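As a hedged sketch, the relational matching could look like the following: a Pearson correlation term (invariant to shift and scale, so it preserves relations rather than exact values) applied row-wise over classes for the inter-class relation and column-wise over the batch for the intra-class relation. The temperature and loss weights are assumptions.

```python
import torch
import torch.nn.functional as F

def pearson_corr_loss(a, b, eps=1e-8):
    # 1 - Pearson correlation, row-wise; cosine similarity of
    # mean-centered vectors equals the Pearson correlation.
    a = a - a.mean(dim=-1, keepdim=True)
    b = b - b.mean(dim=-1, keepdim=True)
    corr = F.cosine_similarity(a, b, dim=-1, eps=eps)
    return (1.0 - corr).mean()

def dist_style_loss(student_logits, teacher_logits, tau=4.0, beta=1.0, gamma=1.0):
    p_s = (student_logits / tau).softmax(dim=-1)   # (B, C)
    p_t = (teacher_logits / tau).softmax(dim=-1)
    inter = pearson_corr_loss(p_s, p_t)            # per instance, over classes
    intra = pearson_corr_loss(p_s.t(), p_t.t())    # per class, over the batch
    return beta * inter + gamma * intra

loss = dist_style_loss(torch.randn(32, 100), torch.randn(32, 100))
```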
Training a good supernet in one-shot NAS methods is difficult because the search space is usually considerably huge (e.g., $13^{21}$). To enhance the supernet's evaluation ability, one greedy strategy is to sample good paths, letting the supernet lean towards the good paths and thereby easing its evaluation burden. However, in practice the search can still be quite inefficient, since the identification of good paths is not accurate enough and the sampled paths still scatter around the whole search space. In this paper, we leverage an explicit path filter to capture the characteristics of paths and directly filter out the weak ones, so that the search can be implemented on the shrunk space more greedily and efficiently. Concretely, based on the fact that good paths are far fewer than weak ones in the space, we argue that the label of "weak path" will be more confident and reliable than that of "good path" in multi-path sampling. We therefore cast the training of the path filter in the positive and unlabeled (PU) learning paradigm, and also encourage a \textit{path embedding} as a better path/operation representation to enhance the identification capacity of the learned filter. By dint of this embedding, we can further shrink the search space by aggregating operations with similar embeddings, making the search more efficient and accurate. Extensive experiments validate the effectiveness of the proposed method GreedyNASv2. For example, our obtained GreedyNASv2-L achieves $81.1\%$ Top-1 accuracy on the ImageNet dataset, significantly outperforming the strong ResNet-50 baseline.
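A hypothetical sketch of such an embedding-based path filter (the architecture and threshold are assumptions; the PU-learning objective for training it is omitted): each layer's chosen operation is embedded, and an MLP scores how likely the path is weak, so confidently weak paths can be dropped before evaluation.

```python
import torch
import torch.nn as nn

class PathFilter(nn.Module):
    """Scores paths by embedding their per-layer operation choices.
    In PU-learning terms, confidently weak paths serve as positives
    while the remaining paths stay unlabeled."""

    def __init__(self, num_layers=21, num_ops=13, dim=32):
        super().__init__()
        self.op_embed = nn.Embedding(num_ops, dim)   # shared op embeddings
        self.scorer = nn.Sequential(
            nn.Linear(num_layers * dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, paths):                        # paths: (B, num_layers) op ids
        e = self.op_embed(paths).flatten(1)
        return self.scorer(e).squeeze(-1)            # higher => more likely weak

# Keep only paths the filter does not flag as weak, shrinking the space.
f = PathFilter()
cand = torch.randint(0, 13, (256, 21))
kept = cand[torch.sigmoid(f(cand)) < 0.5]
```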
Vision transformers (ViTs) inherit the success of NLP, but their structures have not been sufficiently investigated and optimized for visual tasks. One of the simplest solutions is to directly search for the optimal structure via the neural architecture search (NAS) widely used for CNNs. However, we empirically find that such a straightforward adaptation encounters catastrophic failures and is frustratingly unstable in the training of the superformer. In this paper, we argue that since ViTs mainly operate on token embeddings with little inductive bias, the channel imbalance across different architectures worsens the weight-sharing assumption and causes training instability. Therefore, we develop a new cyclic weight-sharing mechanism for the token embeddings of ViTs, which enables each channel to contribute more evenly to all candidate architectures. Besides, we propose identity shifting to alleviate the many-to-one issue in the superformer, and leverage weak augmentation and regularization techniques to maintain more stable training. Based on these, our proposed method, ViTAS, achieves significant superiority on both DeiT- and Twins-based ViTs. For example, with only a 1.4G FLOPs budget, our searched architecture achieves 3.3% higher ImageNet-1k accuracy than the baseline DeiT. With 3.0G FLOPs, our result reaches 82.0% accuracy on ImageNet-1k and 45.9% mAP on COCO 2017, which is 2.4% superior to other ViTs.
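One possible reading of "cyclic" channel sharing, sketched below as an assumption rather than ViTAS's actual mechanism: a candidate needing c embedding channels takes a cyclically shifted slice of the super-embedding, with the offset advancing each call, so all channels are trained about equally often instead of the first c dominating.

```python
import torch
import torch.nn as nn

class CyclicSharedEmbedding(nn.Module):
    """Super patch-embedding whose channels are handed out cyclically
    to candidate architectures of different widths."""

    def __init__(self, patch_dim=768, max_channels=512):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_channels, patch_dim) * 0.02)
        self.max_channels = max_channels
        self.offset = 0

    def forward(self, patches, c):                 # patches: (B, N, patch_dim)
        idx = (self.offset + torch.arange(c)) % self.max_channels
        self.offset = (self.offset + c) % self.max_channels
        return patches @ self.weight[idx].t()      # (B, N, c)

emb = CyclicSharedEmbedding()
print(emb(torch.randn(2, 196, 768), c=192).shape)  # torch.Size([2, 196, 192])
```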
Facial attribute editing aims to manipulate single or multiple attributes of a face image, i.e., to generate a new face with desired attributes while preserving other details. Recently, generative adversarial networks (GANs) and encoder-decoder architectures have usually been incorporated to handle this task, with promising results. Based on the encoder-decoder architecture, facial attribute editing is achieved by decoding the latent representation of the given face conditioned on the desired attributes. Some existing methods attempt to establish an attribute-independent latent representation for further attribute editing. However, such an attribute-independent constraint on the latent representation is excessive, because it restricts the capacity of the latent representation and may result in information loss, leading to over-smooth and distorted generation. Instead of imposing constraints on the latent representation, in this work we apply an attribute classification constraint to the generated image to just guarantee the correct change of desired attributes, i.e., to "change what you want". Meanwhile, reconstruction learning is introduced to preserve attribute-excluding details, in other words, to "only change what you want". Besides, adversarial learning is employed for visually realistic editing. These three components cooperate with each other, forming an effective framework for high-quality facial attribute editing, referred to as AttGAN. Furthermore, our method is also directly applicable to attribute intensity control and can be naturally extended to attribute style manipulation. Experiments on the CelebA dataset show that our method outperforms the state of the art in realistic attribute editing with facial details well preserved.
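A minimal sketch of the three cooperating objectives described above; the loss weights and shapes are assumptions for illustration, not AttGAN's published hyperparameters:

```python
import torch
import torch.nn.functional as F

def attgan_style_losses(x_real, x_recon, d_out_edit, cls_out_edit,
                        target_attrs, lambda_rec=100.0, lambda_cls=10.0):
    # "Change what you want": the edited image, judged by an attribute
    # classifier, should carry the target attributes.
    loss_cls = F.binary_cross_entropy_with_logits(cls_out_edit, target_attrs)
    # "Only change what you want": decoding with the original attributes
    # should reproduce the input image.
    loss_rec = F.l1_loss(x_recon, x_real)
    # Realism: non-saturating generator loss against the discriminator.
    loss_adv = F.binary_cross_entropy_with_logits(
        d_out_edit, torch.ones_like(d_out_edit))
    return loss_adv + lambda_cls * loss_cls + lambda_rec * loss_rec

# Toy shapes: batch of 4 images, 13 binary attributes.
loss = attgan_style_losses(
    torch.randn(4, 3, 128, 128),            # real image
    torch.randn(4, 3, 128, 128),            # reconstruction
    torch.randn(4, 1),                      # discriminator output on edit
    torch.randn(4, 13),                     # attribute classifier logits
    torch.randint(0, 2, (4, 13)).float())   # desired attributes
```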
Recent GPU architectures such as the A100 are equipped with Multi-Instance GPU (MIG) technology, which allows the GPU to be partitioned into multiple small, isolated instances. This technology provides more flexibility for users to support both deep learning training and inference workloads, but utilizing it efficiently can still be challenging. The vision of this paper is to provide a more comprehensive and practical benchmark study for MIG, in order to eliminate the need for tedious manual benchmarking and tuning efforts. To achieve this vision, the paper presents MIGPerf, an open-source tool that streamlines the benchmark study for MIG. Using MIGPerf, the authors conduct a series of experiments, including deep learning training and inference characterization on MIG, GPU sharing characterization, and framework compatibility with MIG. The results of these experiments provide new insights and guidance for users to effectively employ MIG, and lay the foundation for further research on the orchestration of hybrid training and inference workloads on MIGs. The code and results are released on https://github.com/MLSysOps/MIGProfiler. This work is still in progress and more results will be published soon.
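This is not MIGPerf itself, but a minimal throughput probe of the kind such a study automates. A MIG slice is addressed like a normal CUDA device by pointing CUDA_VISIBLE_DEVICES at its UUID, e.g. `CUDA_VISIBLE_DEVICES=MIG-<uuid> python bench.py`; the model and batch size below are arbitrary choices.

```python
import time
import torch
import torchvision.models as models

def bench_inference(batch_size=32, warmup=10, iters=50):
    model = models.resnet50().cuda().eval()
    x = torch.randn(batch_size, 3, 224, 224, device="cuda")
    with torch.no_grad():
        for _ in range(warmup):                 # warm up kernels/allocator
            model(x)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()                # wait for all queued GPU work
    elapsed = time.perf_counter() - start
    return batch_size * iters / elapsed         # images per second

if __name__ == "__main__":
    print(f"{bench_inference():.1f} img/s on this (MIG) instance")
```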
Human parsing aims to partition the humans in an image or video into multiple pixel-level semantic parts. In the last decade, it has attracted significantly increasing interest in the computer vision community and has been utilized in a broad range of practical applications, from security monitoring to social media to visual special effects, to name just a few. Although deep learning-based human parsing solutions have made remarkable achievements, many important concepts, existing challenges, and potential research directions remain unclear. In this survey, we comprehensively review three core sub-tasks: single human parsing, multiple human parsing, and video human parsing, by introducing their respective task settings, background concepts, relevant problems and applications, representative literature, and datasets. We also present quantitative performance comparisons of the reviewed methods on benchmark datasets. Additionally, to promote sustainable development of the community, we put forward a transformer-based human parsing framework, providing a high-performance baseline for follow-up research through universal, concise, and extensible solutions. Finally, we point out a set of under-investigated open issues in this field and suggest new directions for future study. We also provide a regularly updated project page to continuously track recent developments in this fast-advancing field: https://github.com/soeaver/awesome-human-parsing.
Learning to predict masked tokens in a sequence has been shown to be a powerful pretraining objective for large-scale language models. After training, such masked language models can provide distributions of tokens conditioned on bidirectional context. In this short draft, we show that such bidirectional conditionals often exhibit considerable inconsistencies, i.e., they cannot be derived from a coherent joint distribution when considered together. We empirically quantify such inconsistencies in the simple scenario of bigrams for two common styles of masked language models: T5-style and BERT-style. For example, we show that T5 models often contradict their own preferences regarding two similar bigrams. Such inconsistencies may represent a theoretical pitfall for research on sampling sequences based on the bidirectional conditionals learned by BERT-style MLMs. This phenomenon also means that T5-style MLMs capable of infilling will generate discrepant results depending on how much masking is given, which may represent a particular trust issue.
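As an illustrative probe (not the paper's exact protocol): for a coherent joint over a two-token sequence, log p(x1|x2) - log p(x2|x1) must factorize as f(x1) - g(x2), so the cross difference over two word pairs must vanish; a nonzero value certifies inconsistency. The word pairs below are arbitrary examples.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def log_cond(target, context, target_first):
    # log p(target at the masked slot | context at the other slot)
    pair = [tok.mask_token, context] if target_first else [context, tok.mask_token]
    enc = tok(" ".join(pair), return_tensors="pt")
    mask_pos = (enc.input_ids[0] == tok.mask_token_id).nonzero().item()
    logits = mlm(**enc).logits[0, mask_pos]
    tid = tok.convert_tokens_to_ids(target)
    return logits.log_softmax(-1)[tid].item()

def r(w1, w2):
    # log p(w1 | w2 in slot 2) - log p(w2 | w1 in slot 1)
    return log_cond(w1, w2, True) - log_cond(w2, w1, False)

# Zero under any coherent joint; deviations quantify the inconsistency.
gap = r("red", "apple") + r("green", "grape") - r("red", "grape") - r("green", "apple")
print(f"cycle inconsistency: {gap:.3f}")
```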